APPARATUS TO ASSIST IN IMAGING DIAGNOSIS, METHOD FOR DATA COLLECTION, METHOD TO ASSIST IN IMAGING DIAGNOSIS, AND PROGRAM TO ASSIST IN IMAGING DIAGNOSIS
Patent abstract:
Provided are: a diagnostic imaging support apparatus (100) capable of assisting an endoscopist in diagnosis based on endoscopic images; a data collection method; a diagnostic imaging support method; and a diagnostic imaging support program. The diagnostic imaging support apparatus (100) is provided with: a lesion estimation section (20) that estimates, by means of a convolutional neural network, the name and location of a lesion present in an endoscopic image of a patient's digestive organ captured by a digestive-organ endoscopic image capture device, together with information on the certainty of that estimate; and a presentation control section (30) that performs control to generate an analysis result image showing the name and location of the lesion and its certainty, and to present it on the endoscopic image of the digestive organ. The convolutional neural network undergoes a learning process based on the names and locations of lesions present in a plurality of endoscopic images of tumors of digestive organs, determined in advance from features of atrophy, intestinal metaplasia, swelling or depression of the mucosa, and the condition of the mucosal color tones.
Publication number: BR112020008774A2
Application number: R112020008774-2
Filing date: 2018-10-30
Publication date: 2020-12-22
Inventors: Tomohiro Tada; Kazuharu AOYAMA; Toshiaki HIRASAWA; Tsuyoshi Ozawa; Toshiyuki YOSHIO
Applicants: Japanese Foundation For Cancer Research; Ai Medical Service Inc.
IPC main class:
Patent description:
[0001] The present invention relates to a diagnostic imaging support apparatus, a data collection method, a diagnostic imaging support method, and a diagnostic imaging support program.
[0002] Cancer is a disease that is probably the leading cause of death in the world.
[0003] Endoscopy of the digestive organs (in particular, endoscopy of the upper gastrointestinal tract: esophagogastroduodenoscopy, EGD) is a standard method for diagnosing gastric cancer; however, observation using EGD is said to produce a false negative rate of 26% in the detection of gastric cancer (see NPL 1), so gastric cancers frequently go undiagnosed. In addition, the majority of gastric cancers arise from atrophic mucosa, and some early gastric cancers show only subtle morphological changes that are difficult to distinguish from background mucosa with atrophic changes. Less experienced endoscopists tend to miss gastric cancer. For this reason, special training and experience are required for an endoscopist to detect gastric cancer properly. However, it is said that diagnostic experience with 10,000 images and 10 years of practice are necessary to train an endoscopist to a certain level of competence.
[0004] In endoscopy of the digestive organs, many endoscopic images are collected.
[0005] In addition, diagnosis based on such endoscopic images is a so-called subjective determination based on experience and observation, and can lead to various false positive and false negative decisions. Moreover, the best performance of a medical device is achieved only when both the performance of the device itself and reliable operation by the operator are ensured.
[0006] In recent years, AI using deep learning has attracted attention in several medical fields, and there are various reports that AI can perform diagnostic imaging on behalf of specialists in medical fields including radiation oncology, skin cancer classification, diabetic retinopathy, histological classification of gastric biopsies, and characterization of colonic lesions with ultra-magnification endoscopy. In particular, it has been shown that, at the microscopic/endoscopic level, AI can provide accuracy equivalent to that of a specialist (see NPL 2).
[0007] PTL 1: Japanese Patent Application Publication No. 2017-045341. PTL 2: Japanese Patent Application Publication No. 2017-067489.
[0008] NPL 1: Hosokawa O et al., Hepatogastroenterology. 2007; 54(74): 442-4.
[0009] As described above, it is suggested that AI image recognition is comparable to that of a human specialist. In routine endoscopy of the digestive organs, however, diagnostic support technology that uses AI's ability to diagnose from endoscopic images has not yet been introduced into clinical practice, and its practical use is expected in the future.
[0010] It is an objective of the present invention to provide a diagnostic imaging support apparatus, a data collection method, a diagnostic imaging support method, and a diagnostic imaging support program that are capable of assisting an endoscopist in diagnosis based on endoscopic images.
[0011] A diagnostic imaging support apparatus in accordance with the present invention includes: a lesion estimation section that estimates, using a convolutional neural network, the name and location of a lesion present in an endoscopic image of an individual's digestive organ, together with information on the certainty of the name and location of the lesion, the endoscopic image of the digestive organ being captured by a digestive-organ endoscopic image capture device; and a presentation control section that performs control to generate an analysis result image showing the name and location of the lesion and the certainty of the name and location of the lesion, and to present the analysis result image on the endoscopic image of the digestive organ, wherein the convolutional neural network undergoes a learning process based on the names and locations of lesions present in a plurality of endoscopic images of tumors of digestive organs, the lesion names and lesion locations being determined in advance from features of atrophy, intestinal metaplasia, swelling or depression of the mucosa, and the condition of the mucosal color tones.
[0012] A data collection method in accordance with the present invention is a method of collecting, using the diagnostic imaging support apparatus described above, the result presented by the presentation control section as data relating to a lesion of an individual's gastrointestinal tract.
[0013] A diagnostic imaging support method in accordance with the present invention is a method that uses an apparatus including: a lesion estimation section that estimates, using a convolutional neural network, the name and location of a lesion present in an endoscopic image of an individual's digestive organ, together with information on the certainty of the name and location of the lesion, the endoscopic image of the digestive organ being captured by a digestive-organ endoscopic image capture device; and a presentation control section that performs control to generate an analysis result image showing the name and location of the lesion and the certainty of the name and location of the lesion, and to present the analysis result image on the endoscopic image of the digestive organ, the diagnostic imaging support method including subjecting the convolutional neural network to a learning process based on the names and locations of lesions present in a plurality of endoscopic images of tumors of digestive organs, the lesion names and lesion locations being determined in advance from features of atrophy, intestinal metaplasia, swelling or depression of the mucosa, and the condition of the mucosal color tones.
[0014] A diagnostic imaging support program in accordance with the present invention is a program that causes a computer to execute: a process of estimating, using a convolutional neural network, the name and location of a lesion present in an endoscopic image of an individual's digestive organ, together with information on the certainty of the name and location of the lesion, the endoscopic image of the digestive organ being captured by a digestive-organ endoscopic image capture device; and a process of performing control to generate an analysis result image showing the name and location of the lesion and the certainty of the name and location of the lesion, and to present the analysis result image on the endoscopic image, wherein the convolutional neural network undergoes a learning process based on the names and locations of lesions present in a plurality of endoscopic images of tumors of digestive organs, the lesion names and lesion locations being determined in advance from features of atrophy, intestinal metaplasia, swelling or depression of the mucosa, and the condition of the mucosal color tones.
[0015] The standard for determination using the features of lesion sites (atrophy, intestinal metaplasia, swelling or depression of the mucosa, and the condition of the mucosal color tones) according to the present invention can be established with high precision by an experienced endoscopist, and is described in detail, for example, in a book written by one of the present inventors ("Detection and Diagnosis of Early Gastric Cancer - Using Conventional Endoscopy", Toshiaki Hirasawa / Hiroshi Kawachi (authors), Junko Fujisaki (supervising editor), Nihon Medical Center, 2016).
[0016] According to the present invention, it is possible to provide a technique for assisting an endoscopist in diagnosis based on endoscopic images.
[0017] FIG. 1 is a block diagram illustrating a general configuration of a diagnostic imaging support apparatus according to this embodiment; FIG. 2 is a diagram illustrating a hardware configuration of the diagnostic imaging support apparatus according to this embodiment; FIG. 3 is a diagram illustrating a configuration of a convolutional neural network according to this embodiment; FIG. 4 is a diagram illustrating an example of an endoscopic image with an analysis result image shown therein according to this embodiment; FIG. 5 is a diagram illustrating the characteristics of patients and lesions related to the endoscopic images used for the evaluation test data sets;
[0018] The following describes an embodiment of the present invention in detail with reference to the drawings.
[0019] [General configuration of the diagnostic imaging support apparatus] First, a configuration of the diagnostic imaging support apparatus (100) according to this embodiment will be described. FIG. 1 is a block diagram illustrating a general configuration of the diagnostic imaging support apparatus (100). FIG. 2 is a diagram illustrating an example of a hardware configuration of the diagnostic imaging support apparatus (100).
[0020] The diagnostic imaging support apparatus (100) is used for endoscopy of the digestive organs (for example, the esophagus, stomach, duodenum, large intestine, etc.) and assists a doctor (for example, an endoscopist) in diagnosis based on endoscopic images by using the ability of a convolutional neural network (CNN) to diagnose from endoscopic images. The diagnostic imaging support apparatus (100) is connected to the endoscopic image capture device (200) (corresponding to the "digestive-organ endoscopic image capture device" of the present invention) and to the display device (300).
[0021] Examples of the endoscopic image capture device (200) include an electronic endoscope containing an internal imaging section (also called a videoscope) and a camera-equipped endoscope, which is an optical endoscope fitted with a camera containing an internal imaging section. The endoscopic image capture device (200) is inserted, for example, through an individual's mouth or nose into a digestive organ to capture an image of a target diagnosis site in the digestive organ. Then, the endoscopic image capture device (200) outputs endoscopic image data D1 (a still image) representing the endoscopic image captured at the target diagnosis site in the digestive organ (corresponding to an "endoscopic image of the digestive organ" of the present invention) to the diagnostic imaging support apparatus (100). A moving endoscopic image may be used in place of the endoscopic image data D1.
[0022] Examples of the display device (300) include a liquid crystal display, which shows the analysis result image output by the diagnostic imaging support apparatus (100) to the doctor in an identifiable manner.
[0023] The diagnostic imaging support apparatus (100) is a computer whose main components are a CPU (Central Processing Unit) (101), a ROM (Read Only Memory) (102), a RAM (Random Access Memory) (103), an external storage device (for example, a flash memory) (104), a communication interface (105), and a GPU (Graphics Processing Unit) (106).
[0024] The functions of the diagnostic imaging support apparatus (100) are implemented, for example, by the CPU (101) referring to a control program (for example, a diagnostic imaging support program) and various data (for example, endoscopic image data, teaching data, and the model data of the convolutional neural network, such as structured data and learned weight parameters) stored in the ROM (102), the RAM (103), the external storage device (104), or the like. The RAM (103) functions, for example, as a work area or a temporary data storage area.
[0025] All or some of the functions may be implemented by processing performed by a DSP (Digital Signal Processor) in place of, or in addition to, processing performed by the CPU. Likewise, all or some of the functions may be implemented by processing performed by a dedicated hardware circuit in place of, or in addition to, processing performed by software.
[0026] As illustrated in FIG. 1, the diagnostic imaging support apparatus (100) includes the endoscopic image acquisition section (10), the lesion estimation section (20), and the presentation control section (30). The learning apparatus (40) has the function of generating the model data (such as structured data and learned weight parameters) of the convolutional neural network that are used in the diagnostic imaging support apparatus (100). The flow through these sections is sketched in outline below.
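The paragraphs above describe a data flow D1 → D2 → D3 through three sections. As a rough, non-authoritative sketch of that flow, it could be organized as follows in Python; all class and method names here are invented for illustration and do not come from the patent:

```python
# Hypothetical skeleton of the three sections of FIG. 1; the names are
# illustrative assumptions, not the patent's actual implementation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EstimationResult:                          # estimation result data D2
    lesion_name: str                             # e.g. "early-stage gastric cancer"
    lesion_location: Tuple[int, int, int, int]   # rectangular frame (x, y, w, h)
    probability_score: float                     # certainty, 0 < score <= 1

class ImageAcquisitionSection:                   # section (10)
    def acquire(self, capture_device):
        # obtain endoscopic image data D1 from the capture device (200)
        return capture_device.read()

class LesionEstimationSection:                   # section (20)
    def __init__(self, cnn):
        self.cnn = cnn                           # trained convolutional neural network

    def estimate(self, image) -> List[EstimationResult]:
        # estimate lesion name, location and probability score from D1
        return self.cnn.predict(image)

class PresentationControlSection:                # section (30)
    def present(self, image, results: List[EstimationResult]):
        # generate analysis result image data D3; a real system would draw
        # the frames and labels over the endoscopic image (sketched later)
        return image, results
```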
[0027] [Image acquisition section] The image acquisition section (10) obtains the endoscopic image data D1 output by the endoscopic image capture device (200). Then, the image acquisition section (10) supplies the obtained endoscopic image data D1 to the lesion estimation section (20). The image acquisition section (10) may obtain the endoscopic image data D1 directly from the endoscopic image capture device (200), or it may obtain endoscopic image data D1 stored in the external storage device (104), or endoscopic image data D1 supplied via an Internet line or the like.
[0028] [Lesion estimation section] The lesion estimation section (20) uses the convolutional neural network to estimate the name of a lesion (name) and the location of a lesion (location) present in the endoscopic image represented by the endoscopic image data D1 supplied from the image acquisition section (10), and also estimates the certainty of the lesion name and the lesion location. Then, the lesion estimation section (20) supplies the presentation control section (30) with the endoscopic image data D1 and with estimation result data D2 indicating the estimated lesion name, lesion location, and certainty.
[0029] In this embodiment, the lesion estimation section (20) estimates a probability score as an index indicating the certainty of a lesion name and a lesion location. The probability score is represented by a value greater than 0 and less than or equal to 1. A higher probability score indicates a higher certainty of the lesion name and the lesion location.
[0030] The probability score is one example of an index indicating the certainty of a lesion name and a lesion location. Any other suitable index may be used. For example, the certainty may be represented by a value from 0% to 100%, or by one of several value levels.
[0031] A convolutional neural network is a type of feedforward neural network and is based on findings about the structure of the visual cortex of the brain. A convolutional neural network basically has a structure in which a convolutional layer, responsible for extracting local features of an image, and a pooling layer (subsampling layer), which summarizes the features for each local region, are repeated. Each layer of the convolutional neural network holds a plurality of neurons, arranged in a manner corresponding to the visual cortex. Each neuron receives input signals and produces an output signal.
[0032] FIG. 3 is a diagram illustrating a configuration of the convolutional neural network according to this embodiment. The model data (such as structured data and learned weight parameters) of the convolutional neural network are stored in the external storage device (104) together with the diagnostic imaging support program.
[0033] As illustrated in FIG. 3, the convolutional neural network has, for example, a feature acquisition section (Na) and an identification section (Nb). The feature acquisition section (Na) performs a process of acquiring image features from an input image (the endoscopic image data D1). The identification section (Nb) outputs estimation results related to the image from the image features acquired by the feature acquisition section (Na).
[0034] The feature acquisition section (Na) consists of a plurality of feature value acquisition layers (Na1, Na2, ...) that are hierarchically connected to one another. Each feature value acquisition layer (Na1, Na2, ...) includes a convolution layer, an activation layer, and a pooling layer.
[0035] The feature value acquisition layer (Na1), as the first layer, performs a raster scan over the input image at each predetermined size. Then, the feature value acquisition layer (Na1) performs feature value acquisition processing on the scanned data using the convolution layer, the activation layer, and the pooling layer, thereby obtaining the feature values of the input image. The feature value acquisition layer (Na1), as the first layer, extracts relatively simple individual feature values, such as horizontally extending line features or diagonally extending line features.
[0036] The feature value acquisition layer (Na2), as the second layer, performs, for example, a raster scan over the image (also called a feature map) input from the preceding feature value acquisition layer (Na1), at each predetermined size. Then, the feature value acquisition layer (Na2) performs feature value acquisition processing on the scanned data using the convolution layer, the activation layer, and the pooling layer, in the same way as for the feature values of the input image. The feature value acquisition layer (Na2), as the second layer, integrates a plurality of feature values acquired by the first feature value acquisition layer (Na1) with reference to their positional relationships and the like, thereby extracting composite feature values of higher order.
[0037] The feature value acquisition layers from the second layer onward (in FIG. 3, only two feature value acquisition layers (Na) are illustrated, for convenience of description) perform processing similar to that of the second feature value acquisition layer (Na2). Then, the output of the final feature value acquisition layer (the respective map values of a plurality of feature maps) is input to the identification section (Nb).
[0038] The identification section (Nb) consists, for example, of a multilayer perceptron in which a plurality of fully connected layers are hierarchically connected to one another.
[0039] The fully connected layer on the input side of the identification section (Nb) is fully connected to the respective map values of the plurality of feature maps acquired by the feature acquisition section (Na); it performs multiply-and-accumulate operations on the respective values while applying weight coefficients, and outputs the result.
[0040] Each fully connected layer in a subsequent hierarchical level of the identification section (Nb) is fully connected to the values output from the elements of the fully connected layer in the previous hierarchical level and performs multiply-and-accumulate operations on the respective values while applying weight coefficients. In its final stage, the identification section (Nb) has a layer (for example, a softmax function or the like) from which the lesion name and lesion location of a lesion present in the endoscopic image, and the probability score (certainty) of the lesion name and lesion location, are output.
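As a concrete but purely illustrative reading of paragraphs [0031] to [0040], the structure could be sketched in PyTorch as below. The patent does not specify layer counts or sizes, so every dimension here is an assumption, and the experimental examples later in the document used Caffe rather than PyTorch:

```python
# Illustrative sketch only: a feature acquisition section (Na) of repeated
# convolution / activation / pooling stages, followed by an identification
# section (Nb) of fully connected layers whose final softmax yields the
# probability score. All sizes and the location-regression head are assumptions.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, num_lesion_names: int = 2):
        super().__init__()
        self.features = nn.Sequential(        # Na: Na1, Na2, ...
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # Na1
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # Na2
        )
        self.hidden = nn.Sequential(          # Nb: multilayer perceptron
            nn.Flatten(),
            nn.Linear(64 * 56 * 56, 256),     # assumes 224 x 224 input images
            nn.ReLU(),
        )
        self.name_head = nn.Linear(256, num_lesion_names)  # lesion name logits
        self.site_head = nn.Linear(256, 4)    # lesion location (x, y, w, h)

    def forward(self, x):
        h = self.hidden(self.features(x))
        name_logits = self.name_head(h)
        # softmax in the final stage gives the probability score (certainty)
        probability = torch.softmax(name_logits, dim=1)
        return name_logits, self.site_head(h), probability
```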
[0041] The convolutional neural network can acquire its estimation capability by being subjected to a learning process using reference data (hereinafter referred to as "teaching data") prepared in advance by an experienced endoscopist through a marking process, such that the desired estimation results (here, a lesion name, a lesion location, and a probability score) can be produced from an input endoscopic image.
[0042] The convolutional neural network according to this embodiment is configured to receive the endoscopic image data D1 as input ("input" in FIG. 3) and to output, as the estimation result data D2 ("output" in FIG. 3), a lesion name, a lesion location, and a probability score for the image features of the endoscopic image represented by the endoscopic image data D1.
[0043] More preferably, the convolutional neural network may be configured to receive, in addition to the endoscopic image data D1, information related to age, gender, geographical area, or past medical history (for example, this information is provided as an input element of the identification section (Nb)). Since the importance of real-world data in actual clinical settings is well recognized, adding such patient attribute information allows the convolutional neural network to be developed into a system that is more useful in actual clinical settings. That is, the features of an endoscopic image correlate with information such as age, gender, geographical area, or past medical history, and having the convolutional neural network refer to patient attribute information such as age in addition to the endoscopic image data D1 yields a configuration capable of estimating a lesion name and a lesion location with greater precision. This approach is particularly worth incorporating when the present invention is used internationally, since the state of a disease may differ depending on geographical area or race.
[0044] In addition to the processing performed by the convolutional neural network, the lesion estimation section (20) may perform preprocessing, examples of which include conversion of the size or aspect ratio of the endoscopic image, color separation of the endoscopic image, color conversion of the endoscopic image, color extraction, and brightness gradient extraction.
[0045] [Presentation control section] The presentation control section (30) generates an analysis result image showing the lesion name, lesion location, and probability score indicated by the estimation result data D2 output from the lesion estimation section (20), on the endoscopic image represented by the endoscopic image data D1 output from the lesion estimation section (20). Then, the presentation control section (30) outputs the endoscopic image data D1 and analysis result image data D3 representing the generated analysis result image to the display device (300). In this case, a digital image processing system for emphasizing the structure of a lesion part of the endoscopic image, showing a lesion part in a highlighted color with high contrast, providing high definition, or the like, may be connected to perform processing that assists the observer's understanding and determination.
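A minimal sketch of what this presentation control could look like with standard OpenCV drawing calls is shown below; the 0.4 threshold and the color switch anticipate paragraph [0047], and all data values are made up for illustration:

```python
# Sketch of generating the analysis result image: a rectangular frame at the
# estimated lesion location plus the lesion name and probability score.
# The yellow-above-threshold behaviour follows [0047]; the values are made up.
import cv2
import numpy as np

def draw_analysis_result(image, lesion_name, location, score, threshold=0.4):
    x, y, w, h = location
    # yellow frame (BGR) when the score reaches the threshold, green otherwise
    colour = (0, 255, 255) if score >= threshold else (0, 255, 0)
    cv2.rectangle(image, (x, y), (x + w, y + h), colour, 2)
    cv2.putText(image, f"{lesion_name} {score:.2f}", (x, max(y - 8, 12)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, colour, 2)
    return image

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in endoscopic image
draw_analysis_result(frame, "early gastric cancer", (200, 150, 120, 90), 0.8)
```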
[0046] The display device (300) shows the analysis result image represented by the analysis result image data D3 on the endoscopic image represented by the endoscopic image data D1 output from the presentation control section (30). The displayed endoscopic image and analysis result image are used, for example, for a double-check operation on the endoscopic image. In this embodiment, moreover, each endoscopic image and each analysis result image can be shown in a very short time; thus, in addition to the double-check operation on endoscopic images, the use of a moving endoscopic image can assist a doctor in real-time diagnosis.
[0047] FIG. 4 is a diagram illustrating an example of an endoscopic image with an analysis result image shown therein according to this embodiment. As illustrated in FIG. 4, the analysis result image shows a rectangular frame (50) indicating the lesion location (region) estimated by the lesion estimation section (20), a lesion name (early stage cancer: early-stage stomach cancer), and a probability score (0.8). In this embodiment, the rectangular frame indicating the lesion location (region) estimated by the lesion estimation section (20) is shown in yellow when the probability score is greater than or equal to a certain threshold (for example, 0.4), in order to draw the attention of the doctor examining the analysis result image to the rectangular frame. That is, the presentation control section (30) changes the presentation style of the lesion location identification information (in this embodiment, a rectangular frame) identifying a lesion location in the analysis result image according to the probability score indicated by the estimation result data D2 output from the lesion estimation section (20). The rectangular frame (52) indicates, for reference, the lesion location (region) diagnosed as gastric cancer by the doctor, and is not shown in the actual analysis result image; it indicates that the same result as the determination of a well-experienced endoscopist was obtained.
[0048] [Learning apparatus] The learning apparatus (40) receives teaching data D4 stored in an external storage device (not shown) as input and performs a learning process on a convolutional neural network of the learning apparatus (40), such that the convolutional neural network of the lesion estimation section (20) can estimate a lesion location, a lesion name, and a probability score from the endoscopic image data D1.
[0049] In this embodiment, the learning apparatus (40) performs the learning process using, as the teaching data D4, endoscopic images of individuals' digestive organs (corresponding to the "endoscopic images of tumors of digestive organs" of the present invention) captured by the endoscopic image capture device (200), together with the lesion names and lesion locations of the lesions present in those endoscopic images, determined in advance by a doctor from the features of atrophy, intestinal metaplasia, swelling or depression of the mucosa, and the condition of the mucosal color tones. Specifically, the learning apparatus (40) performs the learning process on the convolutional neural network so as to reduce the error (also called loss) between the data output when an endoscopic image is input into the convolutional neural network and the correct answer values (a lesion name and a lesion location).
[0050] In this embodiment, the endoscopic images used as the teaching data D4 include endoscopic images of individuals' digestive organs captured with white-light illumination, endoscopic images captured with dyes (for example, indigo carmine or an iodine solution) applied to the digestive organs, and endoscopic images captured with narrow-band light illumination, for example, NBI (Narrow Band Imaging) narrow-band light or BLI (Blue Laser Imaging) narrow-band light. For the learning process, the endoscopic images used as the teaching data D4 were obtained mainly from the database of a high-volume specialized cancer treatment hospital in Japan; a doctor highly experienced in diagnosis and treatment, certified by the Japan Gastroenterological Endoscopy Society, examined all the images in detail, selected the images, and marked the lesion locations of the lesions through precise manual processing. Careful management of the teaching data D4 (endoscopic image data), which serve as reference data, is directly tied to the analysis accuracy of the diagnostic imaging support apparatus (100); thus, image selection, lesion identification, and marking for feature acquisition by an expert endoscopist with extensive experience is a very important step.
[0051] The teaching data D4 of the endoscopic images may be pixel value data, or data subjected to predetermined color conversion processing or the like. In addition, texture features, shape features, spreading features, and the like obtained in preprocessing may be used. The teaching data D4 may be associated with information related to age, gender, geographical area, or past medical history, in addition to the endoscopic image data, when performing the learning process.
[0052] The learning apparatus (40) may perform the learning process using a known algorithm. The learning apparatus (40) performs the learning process on the convolutional neural network using, for example, well-known backpropagation to adjust the network parameters (such as weight coefficients and biases). The model data (such as structured data and learned weight parameters) of the convolutional neural network on which the learning process has been performed by the learning apparatus (40) are stored in the external storage device (104) together with, for example, the diagnostic imaging support program.
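Paragraphs [0048] to [0052] amount to a standard supervised training loop: reduce the loss between the network output and the correct lesion name and lesion location from the teaching data D4, using backpropagation. A hedged PyTorch-style sketch is given below (the patent's learning apparatus is framework-agnostic, and the experimental examples used Caffe); `LesionCNN` refers to the earlier illustrative sketch:

```python
# Illustrative training loop for the learning apparatus (40): the loss
# combines the error on the lesion name (cross-entropy) and on the lesion
# location (smooth L1); backpropagation adjusts weights and biases [0052].
import torch
import torch.nn as nn

def train(model, teaching_data_loader, epochs=10, lr=1e-4):
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)
    name_loss = nn.CrossEntropyLoss()        # error on the lesion name
    site_loss = nn.SmoothL1Loss()            # error on the lesion location

    for _ in range(epochs):
        for image, true_name, true_site in teaching_data_loader:
            name_logits, site, _ = model(image)
            loss = name_loss(name_logits, true_name) + site_loss(site, true_site)
            optimiser.zero_grad()
            loss.backward()                  # backpropagation
            optimiser.step()
```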
[0053] As described above in detail, in this embodiment the diagnostic imaging support apparatus (100) includes: a lesion estimation section that estimates, using a convolutional neural network, the name and location of a lesion present in an endoscopic image of an individual's digestive organ, together with information on the certainty of the name and location of the lesion, the endoscopic image of the digestive organ being captured by a digestive-organ endoscopic image capture device; and a presentation control section that performs control to generate an analysis result image showing the name and location of the lesion and the certainty of the name and location of the lesion, and to present the analysis result image on the endoscopic image of the digestive organ. The convolutional neural network undergoes a learning process based on the names and locations of lesions present in a plurality of endoscopic images of tumors of digestive organs, the lesion names and lesion locations being determined in advance from the features of atrophy, intestinal metaplasia, swelling or depression of the mucosa, and the condition of the mucosal color tones.
[0054] According to this embodiment with the configuration described above, the convolutional neural network is trained based on a plurality of endoscopic images of digestive organs obtained in advance for each of a plurality of individuals, and on the diagnostic results for lesion names and lesion locations obtained in advance for each of the plurality of individuals. In this way, the lesion names and lesion locations in the digestive organs of a new individual can be estimated in a short time, with accuracy substantially comparable to that of an experienced endoscopist. In endoscopy of the digestive organs, therefore, it is possible to strongly assist an endoscopist in diagnosis based on endoscopic images by using the diagnostic imaging capability of the convolutional neural network according to the present invention. In the actual clinical setting, the endoscopist can use the convolutional neural network directly as a diagnosis support tool in an examination room, can use endoscopic images transmitted from a plurality of examination rooms through a central diagnosis support service, or can perform remote operation over an Internet line to use a diagnosis support service for an organization at a remote location.
[0055] The embodiment described above merely provides specific examples of carrying out the present invention, and such examples should not be construed as limiting the technical scope of the present invention. That is, the present invention can be carried out in various forms without departing from its gist or main characteristics.
[0056] [Experimental Example] Finally, an evaluation test conducted to confirm the advantageous effects achieved with the configuration of the embodiment described above will be described.
[0057] [Preparation of Learning Data Sets] EGD endoscopic images obtained from April 2004 to December 2016 were prepared as learning data sets (teaching data) to be used for training the convolutional neural network in the diagnostic imaging support apparatus. EGD had been performed for screening in daily clinical practice or for preoperative examination, and the endoscopic images were collected using standard endoscopes (GIF-H290Z, GIF-H290, GIF-XP290N, GIF-H260Z, GIF-Q260J, GIF-XP260, GIF-XP260NS, GIF-N260, etc., Olympus Medical Systems Corp., Tokyo) and standard endoscopic video systems (EVIS LUCERA CV-260/CLV-260 and EVIS LUCERA ELITE CV-290/CLV-290SL, Olympus Medical Systems Corp.).
[0058] The endoscopic images serving as learning data sets included endoscopic images of the individuals' digestive organs captured with white-light illumination, endoscopic images captured with dyes (for example, indigo carmine or an iodine solution) applied to the digestive organs, and endoscopic images captured with narrow-band light illumination (for example, NBI narrow-band light or BLI narrow-band light).
Low-quality endoscopic images, due to poor stomach expansion caused by insufficient air supply, bleeding after biopsy, halo formation, lens blurring, defocusing, mucus, or the like, were excluded from the learning data sets.
[0059] In the end, 13,584 endoscopic images of 2,639 histologically proven gastric cancers were collected as learning data sets. A gastric cancer specialist certified by the Japan Gastroenterological Endoscopy Society (with 10 years of experience at a cancer hospital and experience in diagnosing gastric cancer in 6,000 or more cases) performed precise manual marking for feature acquisition of the lesion names and lesion locations of all gastric cancers (early-stage cancer or advanced-stage cancer) in the collected endoscopic images and prepared the learning data sets.
[0060] [Learning/Algorithm] To build the diagnostic imaging support apparatus, a convolutional neural network based on VGG (https://arxiv.org/abs/1409.1556) and consisting of 16 or more layers was used. The Caffe deep learning framework, developed at the Berkeley Vision and Learning Center (BVLC), was used for training and for the evaluation test.
[0061] [Preparation of the Evaluation Test Data Sets] In order to evaluate the diagnostic accuracy of the diagnostic imaging support apparatus based on the constructed convolutional neural network, 2,296 endoscopic images (of the stomach) from 69 patients (77 gastric cancer lesions) who underwent EGD as a routine clinical examination at the Cancer Institute Hospital of JFCR in Ariake from March 1, 2017 to March 31, 2017 were collected as evaluation test data sets. Among these, 1 gastric cancer lesion was present in 62 patients, 2 gastric cancer lesions were present in 6 patients, and 3 gastric cancer lesions were present in 1 patient. All EGD procedures were performed using a standard endoscope (GIF-H290Z, Olympus Medical Systems Corp., Tokyo) and a standard endoscopic video system (EVIS LUCERA ELITE CV-290/CLV-290SL, Olympus Medical Systems Corp.). In EGD, endoscopic images were captured while observing the entire stomach. The number of images captured was 18 to 69 per patient.
[0062] FIG. 5 is a diagram illustrating the characteristics of patients and lesions related to the endoscopic images used for the evaluation test data sets. As illustrated in FIG. 5, the average tumor size (diameter) was 24 mm, and the range of tumor sizes (diameters) was 3 to 170 mm. In the macroscopic classification, superficial lesions (0-IIa, 0-IIb, 0-IIc, 0-IIa+IIc, 0-IIc+IIb, and 0-IIc+III), of which there were 55 (71.4%), were the most numerous. In terms of tumor depth, there were 52 lesions (67.5%) of early-stage gastric cancer (T1) and 25 lesions (32.5%) of advanced-stage gastric cancer (T2-T4).
[0063] [Method of the Evaluation Test] In this evaluation test, the evaluation test data sets were input into the diagnostic imaging support apparatus based on the convolutional neural network on which the learning process had been performed using the learning data sets, and it was assessed whether gastric cancer was correctly detected in each of the endoscopic images constituting the evaluation test data sets. Correct detection of gastric cancer was counted as a "correct answer". On detecting a gastric cancer (lesion) in an endoscopic image, the convolutional neural network outputs the lesion name (early-stage gastric cancer or advanced-stage gastric cancer), the lesion location, and the probability score.
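For orientation only: paragraph [0060] above bases the network on VGG with 16 or more layers, trained in Caffe. A comparable starting point in PyTorch/torchvision (an assumed stand-in, not the authors' code) would be to take a VGG-16 backbone and replace its final fully connected layer with this task's output:

```python
# Assumed illustration of a VGG-16 backbone repurposed for this kind of task
# (here a two-class lesion-name head: early-stage vs advanced-stage gastric
# cancer); the study itself used a Caffe implementation, not torchvision.
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=None)        # 16-layer VGG backbone (untrained here;
                                          # pretrained weights could be loaded for fine-tuning)
model.classifier[6] = nn.Linear(4096, 2)  # replace the original 1000-class head
```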
[0064] Among the gastric cancers present in the endoscopic images constituting the evaluation test data sets, several gastric cancers were present in a plurality of endoscopic images. The evaluation test was therefore conducted using the following definitions.
[0065] (Definition 1) When the same (one) gastric cancer was present in a plurality of endoscopic images, detection of that gastric cancer by the convolutional neural network in any of those images was counted as a correct answer. FIGS. 6A and 6B are diagrams describing the presence of the same cancer in a plurality of endoscopic images. In FIGS. 6A and 6B, the rectangular frames (54) and (56) indicate the lesion locations (regions) of gastric cancer established manually by the doctor. The rectangular frame (58) indicates a lesion location (region) of gastric cancer estimated by the convolutional neural network. FIG. 6A illustrates an endoscopic image in which the gastric cancer appears in the background, and FIG. 6B illustrates an endoscopic image in which the gastric cancer appears in the near field of view. As illustrated in FIGS. 6A and 6B, the convolutional neural network failed to detect the gastric cancer in the background, while it succeeded in detecting the gastric cancer in the foreground.
[0066] (Definition 2) When false positive lesions (gastric cancers) detected in different endoscopic images were the same lesion, these were counted as a single lesion.
[0067] (Definition 3) In some cases, the boundary of a lesion location (region) of gastric cancer is not clear. Thus, when a part of a gastric cancer was detected by the convolutional neural network, this case was counted as a correct answer. FIG. 7 is a diagram describing a difference between a lesion location (region) diagnosed by the doctor and a lesion location (region) diagnosed by the convolutional neural network. In FIG. 7, the rectangular frame (60) indicates a lesion location (region) of gastric cancer established manually by the doctor. The rectangular frame (62) indicates a lesion location (region) of gastric cancer estimated by the convolutional neural network.
[0068] In this evaluation test, in addition, the sensitivity and positive predictive value (PPV) for the diagnostic capability of the convolutional neural network in detecting gastric cancer were calculated using the following equations 1 and 2:
(Equation 1) Sensitivity = (number of gastric cancers correctly detected by the convolutional neural network) / (total number of gastric cancers)
(Equation 2) PPV = (number of gastric cancers correctly detected by the convolutional neural network) / (total number of lesions detected as gastric cancer by the convolutional neural network)
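Equations 1 and 2 reduce to the usual definitions of sensitivity and positive predictive value; the short check below, using the counts reported in the results that follow, reproduces the stated percentages exactly:

```python
# Equations 1 and 2 as code; the counts below are those reported in the
# evaluation test results that follow (71 cancers detected, 6 missed,
# 161 false positives), reproducing the 92.2% sensitivity and 30.6% PPV.
def sensitivity(true_positives, false_negatives):
    # Equation 1: share of real gastric cancers that were detected
    return true_positives / (true_positives + false_negatives)

def positive_predictive_value(true_positives, false_positives):
    # Equation 2: share of detections that were real gastric cancers
    return true_positives / (true_positives + false_positives)

print(f"{sensitivity(71, 6):.1%}")                  # -> 92.2%
print(f"{positive_predictive_value(71, 161):.1%}")  # -> 30.6%
```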
[0069] [Evaluation Test Results] The convolutional neural network completed the analysis of the 2,296 endoscopic images constituting the evaluation test data sets in as little as 47 seconds. It detected 71 of the 77 gastric cancers (lesions). That is, the sensitivity of the diagnostic capability of the convolutional neural network was 92.2%.
[0070] FIGS. 8A and 8B are diagrams illustrating an example of an endoscopic image and an analysis result image. FIG. 8A illustrates an endoscopic image in which a flat, slightly reddened lesion appears on the lesser curvature of the middle gastric body. Since the gastric cancer has an appearance similar to atrophy of the fundic mucosa, it appeared difficult even for an endoscopist to detect the gastric cancer in the endoscopic image of FIG. 8A. FIG. 8B illustrates an analysis result image indicating that the convolutional neural network succeeded in detecting the gastric cancer (0-IIc, 5 mm, tub1, T1a). In FIG. 8B, the rectangular frame (64) indicates the lesion location (region) of the gastric cancer established manually by the doctor. The rectangular frame (66) indicates the lesion location (region) of the gastric cancer estimated by the convolutional neural network.
[0071] FIG. 9 is a diagram illustrating the change in sensitivity with tumor depth and tumor size in this evaluation test.
[0072] On the other hand, the convolutional neural network missed six gastric cancers.
[0073] FIG. 10 is a diagram illustrating the details of the lesions (gastric cancers) not detected by the convolutional neural network. FIGS. 11A, 11B, 11C, 11D, 11E, and 11F are diagrams illustrating endoscopic images (analysis result images) in which lesions not detected by the convolutional neural network are present.
[0074] In FIG. 11A, the rectangular frame (70) indicates the lesion location (region) of a gastric cancer (greater curvature of the antrum, 0-IIc, 3 mm, tub1, T1a) that was not detected by the convolutional neural network. In FIG. 11B, the rectangular frame (72) indicates the lesion location (region) of a gastric cancer (lesser curvature of the middle gastric body, 0-IIc, 4 mm, tub1, T1a) that was not detected by the convolutional neural network.
[0075] In FIG. 11C, the rectangular frame (74) indicates the lesion location (region) of a gastric cancer (posterior wall of the antrum, 0-IIc, 4 mm, tub1, T1a) that was not detected by the convolutional neural network. In FIG. 11D, the rectangular frame (76) indicates the lesion location (region) of a gastric cancer (posterior wall of the antrum, 0-IIc, 5 mm, tub1, T1a) that was not detected by the convolutional neural network.
[0076] In FIG. 11E, the rectangular frame (78) indicates the lesion location (region) of a gastric cancer (greater curvature of the antrum, 0-IIc, 5 mm, tub1, T1a) that was not detected by the convolutional neural network. The rectangular frame (80) indicates a non-cancerous lesion (the pylorus) estimated to be gastric cancer by the convolutional neural network. In FIG. 11F, the rectangular frame (82) indicates the lesion location (region) of a gastric cancer (anterior wall of the lower gastric body, 0-IIc, 16 mm, tub1, T1a) that was not detected by the convolutional neural network.
[0077] In addition, the convolutional neural network detected 161 non-cancerous lesions as gastric cancer. The positive predictive value was 30.6%. FIG. 12 is a diagram illustrating the details of the non-cancerous lesions detected as gastric cancer by the convolutional neural network. As illustrated in FIG. 12, substantially half of the non-cancerous lesions detected as gastric cancer were gastritis with color changes or irregular changes of the mucosal surface. Such gastritis is in many cases difficult for an endoscopist to distinguish from gastric cancer, and the positive predictive value (PPV) of the diagnosis of gastric cancer by gastric biopsy is reported to be 3.2 to 5.6%. In the clinical diagnosis of cancer, failure to detect a cancer leads to a lost opportunity to treat the patient; accordingly, the severity of false negative errors is greater than that of false positive errors. In view of the low PPV of biopsies performed by endoscopists, the PPV of the convolutional neural network is considered sufficiently acceptable clinically.
[0078] FIGS. 13A, 13B, and 13C are diagrams illustrating analysis result images containing non-cancerous lesions detected as gastric cancer by the convolutional neural network.
In FIG. 13A, the rectangular frame (84) indicates a lesion location (region) of gastritis (intestinal metaplasia with an irregular mucosal surface structure) detected as gastric cancer by the convolutional neural network. In FIG. 13B, the rectangular frame (86) indicates a lesion location (region) of gastritis (whitish mucosa due to local atrophy) detected as gastric cancer by the convolutional neural network.
[0079] Next, a second evaluation test conducted to confirm the beneficial effects achieved with the configuration of the embodiment described above will be presented.
[0080] [Preparation of Learning Data Sets] Endoscopic images from 12,895 large-intestine endoscopy examinations performed from December 2013 to March 2017 were prepared as learning data sets (teaching data) to be used for training the convolutional neural network in the diagnostic imaging support apparatus. The endoscopic images contain adenocarcinoma, adenoma, hyperplastic polyp, SSAP (sessile serrated adenoma/polyp), juvenile polyp, Peutz-Jeghers polyp, inflammatory polyp, lymphoid aggregate, and so on, all histologically proven by a certified pathologist. The endoscopies had been performed for screening in daily clinical practice or for preoperative examination, and the endoscopic images were collected using standard endoscopic video systems (EVIS LUCERA: CF TYPE H260AL/I, PCF TYPE Q260IA, Q260AZI, H290I, H290Z, Olympus Medical Systems Corp.).
[0081] FIG. 14A illustrates an endoscopic image of the large intestine containing a protruding adenoma. FIG. 14B illustrates an endoscopic image of the large intestine containing a flat tumor (see the dashed line (90)). FIG. 14C illustrates an endoscopic image of the large intestine containing a protruding hyperplastic polyp. FIG. 14D illustrates an endoscopic image of the large intestine containing a flat hyperplastic polyp (see the dashed line (92)). FIG. 14E illustrates an endoscopic image of the large intestine containing a protruding SSAP. FIG. 14F illustrates an endoscopic image of the large intestine containing a flat SSAP (see the dashed line (94)).
[0082] The endoscopic images serving as learning data sets included endoscopic images of the individuals' large intestines captured with white-light illumination and endoscopic images captured with narrow-band light illumination (for example, NBI narrow-band light).
[0083] FIG. 15A illustrates an endoscopic image of a Peutz-Jeghers polyp in an individual's large intestine captured with white-light illumination. FIG. 15B illustrates an endoscopic image of the same Peutz-Jeghers polyp captured with narrow-band light illumination (NBI narrow-band light).
[0084] In the end, 20,431 endoscopic images of 4,752 histologically proven colorectal polyps were collected as learning data sets, along with 4,013 endoscopic images of normal colonic mucosa. In the collected endoscopic images, precise marking for feature acquisition was performed on the lesion names (types) and lesion locations of all colorectal polyps. FIG. 16 is a diagram illustrating the characteristics of the colorectal polyps and the like related to the endoscopic images used for the learning data sets. In FIG. 16, if a single endoscopic image contained a plurality of colorectal polyps, each of the plurality of colorectal polyps was counted as a different endoscopic image.
[0085] [Learning/Algorithm] To build the diagnostic imaging support apparatus, a convolutional neural network based on the Single-Shot Multibox Detector (SSD, https://arxiv.org/abs/1512.02325) and consisting of 16 or more layers was used. The Caffe deep learning framework, developed at the Berkeley Vision and Learning Center (BVLC), was used for training and for the evaluation test. All layers of the convolutional neural network were fine-tuned with a global learning rate of 0.0001 using stochastic gradient descent. To ensure compatibility with the CNN, each image was resized to 300 x 300 pixels, and the marks indicating the lesion locations were rescaled to match the resized images.
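The 300 x 300 rescaling in [0085] implies that each annotated lesion location must be scaled by the same factors as its image; a minimal sketch is given below (Pillow-based, and the pixel-space box format (x, y, w, h) is an assumption):

```python
# Sketch of the preprocessing in [0085]: resize an endoscopic image to
# 300 x 300 for the SSD-based CNN and rescale its lesion-location mark
# (assumed here to be a pixel-space box (x, y, w, h)) by the same factors.
from PIL import Image

def resize_with_mark(img: Image.Image, mark, size=(300, 300)):
    sx, sy = size[0] / img.width, size[1] / img.height
    x, y, w, h = mark
    # the annotation follows the image so that training targets stay aligned
    return img.resize(size), (x * sx, y * sy, w * sx, h * sy)
```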
[0086] [Preparation of the Evaluation Test Data Sets] In order to evaluate the diagnostic accuracy of the diagnostic imaging support apparatus based on the constructed convolutional neural network, 6,759 endoscopic images (of the large intestine) from 174 patients who underwent colonoscopy as a routine clinical examination from January 1, 2017 to March 31, 2017, including 885 endoscopic images showing colorectal polyps, were collected as evaluation test data sets. In order to evaluate the diagnostic accuracy of the diagnostic imaging support apparatus under routine clinical examination conditions, endoscopic images with fecal contamination or insufficient air supply were also included in the evaluation test data sets. However, endoscopic images showing inflammatory bowel disease were excluded from the evaluation test data sets, as this condition could alter the diagnosis result. In addition, endoscopic images with bleeding after biopsy and endoscopic images taken after endoscopic treatment were also excluded from the evaluation test data sets. The endoscopic images used as the evaluation test data sets included, like the learning data sets, endoscopic images of the individuals' large intestines captured with white-light illumination and endoscopic images captured with narrow-band light illumination (for example, NBI narrow-band light). FIG. 17 is a diagram illustrating the characteristics of the colorectal polyps and the like related to the endoscopic images used for the evaluation test data sets. In FIG. 17, if a single endoscopic image contained a plurality of colorectal polyps, each of the plurality of colorectal polyps was counted as a different endoscopic image.
[0087] [Method of the Evaluation Test] In this evaluation test, the evaluation test data sets were input into the diagnostic imaging support apparatus based on the convolutional neural network on which the learning process had been performed using the learning data sets, and it was assessed whether a colorectal polyp was correctly detected in each of the endoscopic images constituting the evaluation test data sets. Correct detection of a colorectal polyp was counted as a "correct answer". On detecting a colorectal polyp in an endoscopic image, the convolutional neural network outputs the lesion name (type), the lesion location, and the probability score.
[0088] The evaluation test was conducted using the following definitions.
[0089] (Definition 1) In this evaluation test, if a lesion location (region) of a colorectal polyp diagnosed by the convolutional neural network overlapped 80% or more of the area of a lesion location (region) of a colorectal polyp diagnosed by the doctor, it was determined that the convolutional neural network had correctly detected a colorectal polyp in the endoscopic image, and this case was counted as a correct answer.
[0090] In this evaluation test, in addition, the sensitivity and positive predictive value (PPV) for the diagnostic capability of the convolutional neural network in detecting colorectal polyps were calculated using equations 1 and 2 given above.
[0091] [Evaluation Test Results] The convolutional neural network completed the analysis of the endoscopic images constituting the evaluation test data sets at a speed as high as 48.7 images/second (that is, a processing time of about 20 ms per endoscopic image). In addition, the convolutional neural network estimated the lesion locations of 1,247 colorectal polyps in the endoscopic images constituting the evaluation test data sets and correctly detected 1,073 of the 1,172 true (histologically proven) colorectal polyps. The sensitivity and positive predictive value of the diagnostic capability of the convolutional neural network were 92% and 86%, respectively.
[0092] Specifically, in the endoscopic images of the individuals' large intestines captured with white-light illumination, the sensitivity and positive predictive value of the diagnostic capability of the convolutional neural network were 90% and 82%, respectively. In the endoscopic images captured with narrow-band light illumination (NBI narrow-band light), the sensitivity and positive predictive value were 97% and 98%, respectively.
[0093] In addition, the convolutional neural network estimated the lesion locations of 1,143 colorectal polyps in the endoscopic images constituting the evaluation test data sets (including true colorectal polyps less than 10 mm) and correctly detected 969 colorectal polyps out of 1,143 true colorectal polyps. The sensitivity and positive predictive value of the diagnostic capability of the convolutional neural network were 92% and 85%, respectively.
[0094] In order to improve the diagnostic capability of the convolutional neural network, it is important to identify the reasons why the convolutional neural network failed to detect true colorectal polyps. Accordingly, the present inventors reviewed all endoscopic images (false positive images) in which colorectal polyps were incorrectly detected by the convolutional neural network and all endoscopic images (false negative images) in which true colorectal polyps were not detected by the convolutional neural network, and classified the images into several categories.
[0095] FIG. 18 is a diagram illustrating the results of the classification of the false positive images and false negative images. As illustrated in FIG. 18, among 165 false positive images, 64 (39%) showed normal structures easily distinguishable from colorectal polyps, most of which were images of the ileocecal valve (N = 56). In addition, 55 false positive images (33%) showed colonic folds, most of which were images associated with insufficient air supply.
Other false positive images (20%) contained artificially generated anomalies easily distinguishable from true colorectal polyps, caused by halo formation (N = 14), blurring of the camera lens surface (N = 4), defocusing (N = 2), or feces (N = 4). In addition, 12 false positive images (7%) were suspected to be true polyps but could not be definitively confirmed.
[0096] As illustrated in FIG. 18, moreover, in 50 (56%) of the 89 false negative images, the colorectal polyps were considered to have gone undetected as true colorectal polyps by the convolutional neural network because the polyps were small, or so dark that the surface texture of the polyps was barely recognizable.
[0097] The present inventors also reviewed the degree of agreement between the classification of colorectal polyps detected and classified by the convolutional neural network (CNN classification) and the classification of the histologically proven colorectal polyps (histological classification), as a measure of the classification accuracy of the convolutional neural network. FIGS. 19A and 19B are diagrams illustrating the degrees of agreement between the CNN classification and the histological classification.
[0098] As illustrated in FIG. 19A, in the endoscopic images of the individuals' large intestines captured with white-light illumination, the classification of colorectal polyps accounting for 83% of the total was correctly performed by the convolutional neural network. Colorectal polyps accounting for 97% of the colorectal polyps histologically proven to be adenomas were correctly classified as adenomas by the convolutional neural network; the positive predictive value and negative predictive value of this diagnostic capability (classification capability) were 86% and 85%, respectively. In addition, colorectal polyps accounting for 47% of the colorectal polyps histologically proven to be hyperplastic polyps were correctly classified as hyperplastic polyps by the convolutional neural network; the positive predictive value and negative predictive value of this diagnostic capability (classification capability) were 64% and 90%, respectively. Moreover, many of the colorectal polyps histologically proven to be SSAPs were incorrectly classified as adenomas (26%) or hyperplastic polyps (52%) by the convolutional neural network.
[0099] As illustrated in FIG. 19B, in the endoscopic images of the individuals' large intestines captured with narrow-band light illumination (NBI narrow-band light), the classification of colorectal polyps accounting for 81% of the total was correctly performed by the convolutional neural network. Colorectal polyps accounting for 97% of the colorectal polyps histologically proven to be adenomas were correctly classified as adenomas by the convolutional neural network; the positive predictive value and negative predictive value of this diagnostic capability (classification capability) were 83% and 91%, respectively.
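The positive and negative predictive values quoted in [0098] and [0099] come from the 2 x 2 agreement table between the CNN classification and the histological classification. For a given class (say, adenoma) they can be computed as below; the example counts are placeholders, not the figures in FIG. 19A or 19B:

```python
# Assumed illustration: PPV and NPV for one class (e.g. "adenoma") from a
# 2 x 2 table of CNN classification versus histological classification.
# The example counts are placeholders, not the study's data.
def predictive_values(tp, fp, fn, tn):
    ppv = tp / (tp + fp)   # CNN says adenoma, histology agrees
    npv = tn / (tn + fn)   # CNN says non-adenoma, histology agrees
    return ppv, npv

ppv, npv = predictive_values(tp=97, fp=16, fn=3, tn=84)
```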
[0100] The present inventors also reviewed, for colorectal polyps of 5 mm or less, the degree of concordance between the CNN classification and the histological classification as the classification accuracy of the convolutional neural network. FIG. 20 is a diagram illustrating the degrees of concordance between the CNN classification and the histological classification for colorectal polyps of 5 mm or less. [0101] As illustrated in FIG. 20, in the endoscopic images captured with white light illumination of the individual's large intestine, colorectal polyps (N = 348) accounting for 98% of those histologically proven to be adenomas (N = 356) were correctly classified as adenomas by the convolutional neural network; the positive predictive value and the negative predictive value of this classification capability were 85% and 88%, respectively. Colorectal polyps accounting for 50% of those histologically proven to be hyperplastic polyps were correctly classified as hyperplastic polyps; the positive predictive value and the negative predictive value were 77% and 88%, respectively. Although not illustrated, in the endoscopic images captured with narrow-band (NBI) light illumination of the individual's large intestine, colorectal polyps (N = 138) accounting for 97% of those histologically proven to be adenomas (N = 142) were correctly classified as adenomas; the positive predictive value and the negative predictive value were 84% and 88%, respectively. The results illustrated in FIGS. 19A, 19B and 20 indicate that the classification capability of the convolutional neural network is maintained regardless of the size of the colorectal polyps. [0102] As the results of the second evaluation test described above indicate, the convolutional neural network detects colorectal polyps with considerable accuracy and at remarkable speed, even when the polyps are small, and is therefore likely to be useful in reducing missed colorectal polyps in endoscopy of the large intestine. The results also indicate that the convolutional neural network can correctly classify the detected colorectal polyps and thus greatly assist an endoscopist in diagnosis based on endoscopic images. [0103] FIGS. 21A, 21B, 21C, 21D, 21E and 21F and FIGS. 22A, 22B, 22C, 22D, 22E, 22F, 22G and 22H are diagrams illustrating examples of endoscopic images and analysis result images in the second evaluation test. [0104] FIG. 21B illustrates an endoscopic image and an analysis result image containing a colorectal polyp (hyperplastic polyp) that was correctly detected and classified by the convolutional neural network. As illustrated in FIG. 21B, the analysis result image shows a rectangular frame (114) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (hyperplastic polyp: Hyperplastic) and a probability score (0.83). A rectangular frame (116) indicates, for reference, the lesion site (band) of the histologically proven colorectal polyp (hyperplastic polyp) and is not shown in the actual analysis result image.
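Every analysis result image described in these figures is simply the endoscopic frame with a rectangular frame, a lesion name and a probability score drawn on it. The snippet below is a minimal rendering sketch; the use of Pillow, the yellow overlay and the label placement are illustrative assumptions, not details from the specification.

```python
from PIL import Image, ImageDraw

def draw_analysis_result(image, box, lesion_name, probability):
    """box: (x1, y1, x2, y2) lesion site (band) estimated by the CNN."""
    out = image.copy()
    draw = ImageDraw.Draw(out)
    draw.rectangle(box, outline="yellow", width=3)  # lesion band
    draw.text((box[0], max(0, box[1] - 12)),        # label above the box
              f"{lesion_name} {probability:.2f}", fill="yellow")
    return out

# Example in the spirit of FIG. 21B: a hyperplastic polyp at 0.83.
frame = Image.new("RGB", (640, 480))  # stand-in for an endoscopic frame
overlay = draw_analysis_result(frame, (200, 150, 360, 300),
                               "Hyperplastic", 0.83)
```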
[0105] FIG. 21C illustrates, as a false negative image, an endoscopic image containing a colorectal polyp (adenoma) that was not detected by the convolutional neural network. A rectangular frame (118) indicates, for reference, the lesion site (band) of the histologically proven colorectal polyp (adenoma) and is not shown in the actual analysis result image. [0106] FIG. 21D illustrates, as a false positive image, an endoscopic image and an analysis result image containing a normal colonic fold that was incorrectly detected and classified by the convolutional neural network. As illustrated in FIG. 21D, the analysis result image shows a rectangular frame (120) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (adenoma: Hyperplastic) and a probability score (0.70). [0107] FIG. 21E illustrates an endoscopic image and an analysis result image containing a colorectal polyp (adenoma) whose lesion site (band) was correctly detected by the convolutional neural network but which was incorrectly classified. [0108] FIG. 21F illustrates an endoscopic image and an analysis result image containing a colorectal polyp (hyperplastic polyp) whose lesion site (band) was correctly detected by the convolutional neural network but which was incorrectly classified. As illustrated in FIG. 21F, the analysis result image shows a rectangular frame (126) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (adenoma: Adenoma) and a probability score (0.62). A rectangular frame (128) indicates, for reference, the lesion site (band) of the histologically proven colorectal polyp (hyperplastic polyp) and is not shown in the actual analysis result image. [0109] FIG. 22A illustrates, as a false negative image, an endoscopic image containing colorectal polyps (adenomas) that were not detected by the convolutional neural network. Rectangular frames (130) and (132) indicate, for reference, the lesion sites (bands) of the histologically proven colorectal polyps (adenomas) and are not shown in the actual analysis result image. [0110] FIG. 22B illustrates, as a false negative image, an endoscopic image containing a colorectal polyp (adenoma) that was not detected by the convolutional neural network. A rectangular frame (134) indicates, for reference, the lesion site (band) of the histologically proven colorectal polyp (adenoma) and is not shown in the actual analysis result image. The colorectal polyp (adenoma) indicated by the rectangular frame (134) was dark and hard to recognize; the colorectal polyp is therefore considered to have been missed by the convolutional neural network. [0111] FIG. 22C illustrates, as a false negative image, an endoscopic image containing a colorectal polyp (adenoma) that was not detected by the convolutional neural network. A rectangular frame (136) indicates, for reference, the lesion site (band) of the histologically proven colorectal polyp (adenoma) and is not shown in the actual analysis result image. The colorectal polyp (adenoma) indicated by the rectangular frame (136) was captured from the side or only in part; the colorectal polyp is therefore considered to have been missed by the convolutional neural network. [0112] FIG. 22D illustrates, as a false negative image, an endoscopic image containing a colorectal polyp (adenoma) that was not detected by the convolutional neural network.
A rectangular frame (138) indicates, for reference, the lesion site (band) of the histologically proven colorectal polyp (adenoma) and is not shown in the actual analysis result image. The colorectal polyp (adenoma) indicated by the rectangular frame (138) was very large and hard to recognize. [0113] FIG. 22E illustrates, as a false positive image, an endoscopic image and an analysis result image containing an ileocecal valve (a normal structure) that was incorrectly detected and classified by the convolutional neural network. As illustrated in FIG. 22E, the analysis result image shows a rectangular frame (140) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (others: Others) and a probability score (0.62). [0114] FIG. 22F illustrates, as a false positive image, an endoscopic image and an analysis result image containing a normal colonic fold that was incorrectly detected and classified by the convolutional neural network. As illustrated in FIG. 22F, the analysis result image shows a rectangular frame (142) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (adenoma: Adenoma) and a probability score (0.32). [0115] FIG. 22G illustrates, as a false positive image, an endoscopic image and an analysis result image containing halo formation (an image artifact) that was incorrectly detected and classified by the convolutional neural network. [0116] FIG. 22H illustrates, as a false positive image, an endoscopic image and an analysis result image containing a polyp that was incorrectly detected and classified by the convolutional neural network; the polyp was suspected to be a true polyp but could not be definitively confirmed. As illustrated in FIG. 22H, the analysis result image shows a rectangular frame (146) indicating the lesion site (band) estimated by the convolutional neural network and a lesion name (hyperplastic polyp). [0117] In the following, a third evaluation test conducted to confirm the advantageous effects achieved with the configuration of the embodiment described above will be described. [0118] [Preparation of Learning Data Sets] Endoscopic images of the esophagus, 8428 in number (384 patients), obtained from February 2016 to April 2017, were prepared as learning data sets (teaching data) to be used for the learning of the convolutional neural network in the diagnostic imaging support apparatus. The endoscopic images contain esophageal cancers histologically proven by a certified pathologist (specifically, squamous cell carcinoma (ESCC) or adenocarcinoma (EAC)). Endoscopy was performed for examination in daily clinical practice or for preoperative assessment, and the endoscopic images were collected using standard endoscopes (GIF-H290Z, GIF-H290, GIF-XP290N, GIF-H260Z, GIF-H260; Olympus Medical Systems Corp., Tokyo) and standard endoscopic video systems (EVIS LUCERA CV-260/CLV-260 and EVIS LUCERA ELITE CV-290/CLV-290SL; Olympus Medical Systems Corp.).
[0119] The endoscopic images serving as learning data sets included endoscopic images captured with white light illumination of the individual's esophagus and endoscopic images captured with narrow-band (NBI) light illumination of the individual's esophagus. Endoscopic images of poor quality due to halo formation, lens fogging, defocusing, mucus, insufficient air supply or the like were excluded from the learning data sets. [0120] Finally, 8428 endoscopic images of histologically proven esophageal cancers were collected as learning data sets. The endoscopic images contained 397 lesions of squamous cell carcinoma, of which 332 were lesions of superficial esophageal cancer and 65 were lesions of advanced esophageal cancer, and 32 lesions of adenocarcinoma, of which 19 were lesions of superficial esophageal cancer and 13 were lesions of advanced esophageal cancer. An experienced endoscopist with 2000 or more cases of upper endoscopy performed precise manual marking to extract, as features, the lesion names (superficial esophageal cancer or advanced esophageal cancer) and the lesion sites of all esophageal cancers (squamous cell carcinoma or adenocarcinoma) in the collected endoscopic images. [0121] [Learning/Algorithm] To build the diagnostic imaging support apparatus, a convolutional neural network based on a Single-Shot MultiBox Detector (SSD, https://arxiv.org/abs/1512.02325) and consisting of 16 or more layers was used. The Caffe deep learning framework, developed at the Berkeley Vision and Learning Center (BVLC), was used for the learning process and for the evaluation test. All layers of the convolutional neural network were fine-tuned with a global learning rate of 0.0001 using stochastic gradient descent. To ensure compatibility with the CNN, each image was resized to 300 × 300 pixels, and the marking of each lesion site was rescaled accordingly.
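The resizing step just described is mechanical enough to sketch. The snippet below illustrates, under stated assumptions, how an image and its lesion-site marking might be rescaled together for the 300 × 300 SSD input; the (x1, y1, x2, y2) box format and the use of Pillow are not taken from the specification.

```python
from PIL import Image

SSD_INPUT = 300  # the 300 x 300 input size stated above

def resize_with_box(image, box):
    """image: PIL.Image; box: (x1, y1, x2, y2) in original pixels."""
    sx = SSD_INPUT / image.width
    sy = SSD_INPUT / image.height
    resized = image.resize((SSD_INPUT, SSD_INPUT))
    # Rescale the lesion-site marking by the same factors as the image.
    scaled_box = (box[0] * sx, box[1] * sy, box[2] * sx, box[3] * sy)
    return resized, scaled_box
```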
[0122] [Preparation of Evaluation Test Data Sets] To assess the diagnostic accuracy of the diagnostic imaging support apparatus based on the constructed convolutional neural network, 1118 endoscopic images of the esophagus for 97 patients (47 patients presenting 49 esophageal cancer lesions, 50 patients presenting no esophageal cancer) who underwent endoscopy as a normal clinical examination were collected as evaluation test data sets. Of the 47 patients, 45 had one esophageal cancer lesion and 2 had two esophageal cancer lesions. Like the learning data sets, the endoscopic images used as evaluation test data sets included endoscopic images captured with white light illumination of the individual's esophagus and endoscopic images captured with narrow-band (NBI) light illumination of the individual's esophagus. [0123] FIG. 23 is a diagram illustrating the characteristics of the patients (n = 47) and lesions (n = 49) related to the endoscopic images used for the evaluation test data sets. As illustrated in FIG. 23, the average tumor size (diameter) was 20 mm, and tumor sizes (diameters) ranged from 5 to 70 mm. In the macroscopic classification, there were 43 superficial-type lesions (type 0-I, type 0-IIa, type 0-IIb and type 0-IIc), outnumbering the advanced type (6 lesions). In terms of tumor depth, there were 42 lesions of superficial esophageal cancer (mucosal cancer: T1a; submucosal cancer: T1b) and 7 lesions of advanced esophageal cancer (T2-T4). In histopathology, there were 41 lesions of squamous cell carcinoma and 8 lesions of adenocarcinoma. [0124] [Method for the Evaluation Test] In this evaluation test, the evaluation test data sets were input to the diagnostic imaging support apparatus based on the convolutional neural network in which the learning process had been carried out using the learning data sets, and it was assessed whether esophageal cancer was correctly detected in each of the endoscopic images constituting the evaluation test data sets. Correct detection of an esophageal cancer was counted as a correct answer. When detecting an esophageal cancer in an endoscopic image, the convolutional neural network outputs the lesion name (superficial esophageal cancer or advanced esophageal cancer), the lesion site and a probability score. [0125] The evaluation test was conducted using the following definitions. [0126] (Definition 1) In this evaluation test, if the convolutional neural network detected at least a part of an esophageal cancer, it was determined that the convolutional neural network had detected the esophageal cancer, and this case was counted as a correct answer. This is because it can be difficult to recognize the entire border of an esophageal cancer in some endoscopic images. Note that in this evaluation test, even when an esophageal cancer was in fact present within the rectangular frame indicating the lesion site (band) detected by the convolutional neural network, it was determined that the convolutional neural network had failed to detect the esophageal cancer if the rectangular frame covered non-cancerous regions over a wide range (80% or more of the endoscopic image). [0127] In this evaluation test, the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of the diagnostic capability of the convolutional neural network in detecting esophageal cancer in each endoscopic image were also calculated using the following equations 1 to 4.
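Equations 1 to 4 are referenced but not reproduced in this translation. Assuming they are the standard definitions over true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN), they would read:

```latex
\begin{align}
\text{Sensitivity} &= \frac{TP}{TP + FN} \tag{1}\\
\text{Specificity} &= \frac{TN}{TN + FP} \tag{2}\\
\text{PPV} &= \frac{TP}{TP + FP} \tag{3}\\
\text{NPV} &= \frac{TN}{TN + FN} \tag{4}
\end{align}
```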
[0128] [Evaluation Test Results] The convolutional neural network completed the analysis of the 1118 endoscopic images constituting the evaluation test data sets in 27 seconds. Notably, the convolutional neural network correctly detected all seven esophageal cancers whose tumor size was less than 10 mm. The positive predictive value of the diagnostic capability of the convolutional neural network was 40%, with shadows and normal structures being misdiagnosed, while the negative predictive value was 95%. In addition, the convolutional neural network correctly determined the classification of esophageal cancers (superficial esophageal cancer or advanced esophageal cancer) with an accuracy of 98%. [0129] FIGS. 24A, 24B, 24C and 24D are diagrams illustrating examples of endoscopic images and analysis result images in the third evaluation test. FIG. 24A illustrates an endoscopic image (captured with white light illumination of the individual's esophagus) and an analysis result image containing an esophageal cancer that was correctly detected and classified by the convolutional neural network. As illustrated in FIG. 24A, the analysis result image shows a rectangular frame (150) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (superficial esophageal cancer) and a probability score (0.91). A rectangular frame (152) indicates, for reference, the lesion site (band) of the histologically proven esophageal cancer and is not shown in the actual analysis result image. [0130] FIG. 24B corresponds to FIG. 24A and illustrates an endoscopic image (captured with NBI narrow-band light illumination of the individual's esophagus) and an analysis result image containing an esophageal cancer that was correctly detected and classified by the convolutional neural network. As illustrated in FIG. 24B, the analysis result image shows a rectangular frame (154) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (superficial esophageal cancer) and a probability score (0.97). A rectangular frame (156) indicates, for reference, the lesion site (band) of the histologically proven esophageal cancer and is not shown in the actual analysis result image. [0131] FIG. 24C illustrates, as a false negative image, an endoscopic image (captured with white light illumination of the individual's esophagus) containing an esophageal cancer that was not detected by the convolutional neural network. A rectangular frame (158) indicates, for reference, the lesion site (band) of the histologically proven esophageal cancer and is not shown in the actual analysis result image. [0132] FIG. 24D corresponds to FIG. 24C and illustrates an endoscopic image (captured with NBI narrow-band light illumination of the individual's esophagus) and an analysis result image containing an esophageal cancer that was correctly detected and classified by the convolutional neural network. As illustrated in FIG. 24D, the analysis result image shows a rectangular frame (160) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (superficial esophageal cancer) and a probability score (0.98). A rectangular frame (162) indicates, for reference, the lesion site (band) of the histologically proven esophageal cancer and is not shown in the actual analysis result image. [0133] FIG. 25 is a diagram illustrating the results of the detection of esophageal cancer / non-esophageal cancer by the convolutional neural network and by biopsy for the cases of the 47 patients with esophageal cancer and the 50 patients without esophageal cancer. In FIG. 25, when the convolutional neural network correctly detected esophageal cancer / non-esophageal cancer in an endoscopic image captured with at least one of white light illumination and NBI narrow-band light illumination of the individual's esophagus, it was determined, in the comprehensive (per-case) diagnostic reading, that the convolutional neural network had correctly detected esophageal cancer / non-esophageal cancer. As illustrated in FIG. 25, in this comprehensive reading the convolutional neural network correctly detected esophageal cancer in 98% (46/47) of the cases of esophageal cancer present in the endoscopic images. Although not illustrated, the convolutional neural network also correctly detected all esophageal cancers whose tumor size was less than 10 mm.
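The comprehensive reading described above is a per-case OR over the two illumination modes: a case counts as detected when the convolutional neural network finds the cancer in at least one white-light or NBI frame of that case. A small sketch of this combination rule follows; the case-record layout is an assumption made for illustration.

```python
# Per-case "comprehensive" sensitivity: a cancer case counts as
# detected if the CNN detected it under white light (WL) or under
# NBI narrow-band light. The dict-based record format is assumed.

def comprehensive_sensitivity(cases):
    """cases: list of dicts like
    {"cancer": True, "detected_wl": False, "detected_nbi": True}."""
    cancer_cases = [c for c in cases if c["cancer"]]
    hits = sum(c["detected_wl"] or c["detected_nbi"] for c in cancer_cases)
    return hits / len(cancer_cases)

# With the reported figures, 46 of the 47 cancer cases are detected
# comprehensively: 46 / 47 ~= 0.98, matching the 98% quoted above.
```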
[0134] FIG. 26 is a diagram illustrating, for the cases illustrated in FIG. 25, the sensitivity for endoscopic images captured with white light illumination (hereinafter, white-light sensitivity), the sensitivity for endoscopic images captured with NBI narrow-band light illumination (hereinafter, NBI sensitivity) and the sensitivity for endoscopic images captured with at least one of white light and NBI narrow-band light (hereinafter, comprehensive sensitivity). As illustrated in FIG. 26, in the cases illustrated in FIG. 25, the NBI sensitivity (89%) was higher than the white-light sensitivity (81%), and the comprehensive sensitivity (98%) was much higher than the white-light sensitivity. The white-light sensitivity, the NBI sensitivity and the comprehensive sensitivity for squamous cell carcinoma were 79%, 89% and 97%, respectively. The white-light sensitivity, the NBI sensitivity and the comprehensive sensitivity for adenocarcinoma were 88%, 88% and 100%, respectively. [0135] FIG. 27 is a diagram illustrating the results of the detection of esophageal cancer / non-esophageal cancer by the convolutional neural network and by biopsy for the endoscopic images captured with white light illumination or NBI narrow-band light illumination. FIG. 28 is a diagram illustrating, for the endoscopic images illustrated in FIG. 27, the white-light sensitivity and the NBI sensitivity. [0136] As illustrated in FIG. 27, the convolutional neural network correctly detected esophageal cancer in 74% (125/168) of the endoscopic images for which the presence of esophageal cancer had been diagnosed as a result of biopsies. The sensitivity, specificity, positive predictive value and negative predictive value of the diagnostic capability of the convolutional neural network were 74%, 80%, 40% and 95%, respectively. As illustrated in FIG. 28, the NBI sensitivity (81%) was higher than the white-light sensitivity (69%). The white-light sensitivity and the NBI sensitivity for squamous cell carcinoma were 72% and 84%, respectively. The white-light sensitivity and the NBI sensitivity for adenocarcinoma were 55% and 67%, respectively. [0137] The present inventors also reviewed, as the classification accuracy of the convolutional neural network, the degree of concordance between the classification of esophageal cancers detected and classified by the convolutional neural network (CNN classification) and the classification (depth of invasion) of the histologically proven esophageal cancers. [0138] As illustrated in FIG. 29, in the endoscopic images captured with white light illumination of the individual's esophagus, the convolutional neural network correctly classified esophageal cancers accounting for 100% (89/89) of the total. That is, esophageal cancers accounting for 100% (75/75) of those histologically proven to be superficial esophageal cancer were correctly classified as superficial esophageal cancer by the convolutional neural network.
Esophageal cancers accounting for 100% (14/14) of those histologically proven to be advanced esophageal cancer were correctly classified as advanced esophageal cancer by the convolutional neural network. [0139] In the endoscopic images captured with NBI narrow-band light illumination of the individual's esophagus, the convolutional neural network correctly classified esophageal cancers accounting for 96% (76/79) of the total. Esophageal cancers accounting for 99% (67/68) of those histologically proven to be superficial esophageal cancer were correctly classified as superficial esophageal cancer, and esophageal cancers accounting for 82% (9/11) of those histologically proven to be advanced esophageal cancer were correctly classified as advanced esophageal cancer. [0140] In the endoscopic images captured with white light or NBI narrow-band light illumination of the individual's esophagus, the convolutional neural network correctly classified esophageal cancers accounting for 98% (165/168) of the total. Esophageal cancers accounting for 99% (142/143) of those histologically proven to be superficial esophageal cancer were correctly classified as superficial esophageal cancer, and esophageal cancers accounting for 92% (23/25) of those histologically proven to be advanced esophageal cancer were correctly classified as advanced esophageal cancer. The classification accuracy of the convolutional neural network was thus considered to be very high. The classification accuracy of the convolutional neural network for squamous cell carcinoma and for adenocarcinoma was 99% (146/147) and 90% (19/21), respectively. [0141] To improve the diagnostic capability of the convolutional neural network, it is important to identify why the convolutional neural network incorrectly detected esophageal cancers and why it failed to detect true esophageal cancers, that is, why it missed them. Accordingly, the present inventors reviewed all endoscopic images in which esophageal cancers were incorrectly detected by the convolutional neural network (false positive images) and all endoscopic images in which true esophageal cancers were missed by the convolutional neural network (false negative images), and classified these images into several categories. [0142] FIG. 30 is a diagram illustrating the results of the classification of the false positive and false negative images. As illustrated in FIG. 30, among the 188 false positive images, 95 (50%) contained shadows. [0143] FIG. 31A is a diagram illustrating, as a false positive image, an endoscopic image and an analysis result image containing shadows that were incorrectly detected and classified by the convolutional neural network. As illustrated in FIG. 31A, the analysis result image shows a rectangular frame (170) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (superficial esophageal cancer) and a probability score (0.70). [0144] FIG.
31B is a diagram illustrating, as a false positive image, an endoscopic image and an analysis result image containing a normal structure (the gastroesophageal junction) easily misidentified as esophageal cancer, which was incorrectly detected and classified by the convolutional neural network. As illustrated in FIG. 31B, the analysis result image shows a rectangular frame (172) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (superficial esophageal cancer) and a probability score (0.57). [0145] FIG. 31C is a diagram illustrating, as a false positive image, an endoscopic image and an analysis result image containing a normal structure (the left main bronchus) easily misidentified as esophageal cancer, which was incorrectly detected and classified by the convolutional neural network. As illustrated in FIG. 31C, the analysis result image shows a rectangular frame (174) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (superficial esophageal cancer) and a probability score (0.60). [0146] FIG. 31D is a diagram illustrating, as a false positive image, an endoscopic image and an analysis result image containing a normal structure (the vertebral body) easily misidentified as esophageal cancer, which was incorrectly detected and classified by the convolutional neural network. As illustrated in FIG. 31D, the analysis result image shows a rectangular frame (176) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (superficial esophageal cancer) and a probability score (0.80). [0147] FIG. 31E is a diagram illustrating, as a false positive image, an endoscopic image and an analysis result image containing a benign lesion that could be misdiagnosed as esophageal cancer, which was incorrectly detected and classified by the convolutional neural network. [0148] FIG. 31F is a diagram illustrating, as a false positive image, an endoscopic image and an analysis result image containing a benign lesion (focal atrophy) that could be misdiagnosed as esophageal cancer, which was incorrectly detected and classified by the convolutional neural network. As illustrated in FIG. 31F, the analysis result image shows a rectangular frame (180) indicating the lesion site (band) estimated by the convolutional neural network, a lesion name (superficial esophageal cancer) and a probability score (0.83). [0149] As illustrated in FIG. 30, moreover, in 10 (25%) of the 41 false negative images, the lesion is considered to have been missed as a true esophageal cancer as a result of being misdiagnosed by the convolutional neural network as inflammation of the background mucosa. In 7 false negative images (17%), the squamous cell carcinoma is considered to have been missed as a true esophageal cancer because the image of the squamous cell carcinoma irradiated with NBI narrow-band light was blurred. [0150] In 4 false negative images (10%), a Barrett's esophageal adenocarcinoma was present but is considered to have been missed as a true esophageal cancer because of insufficient learning regarding adenocarcinoma.
Furthermore, in 20 false negative images (49%), the lesions are considered to have been missed as true esophageal cancers by the convolutional neural network because they were difficult to diagnose, for example a lesion appearing in the background of an endoscopic image, or a lesion of which only a part was present in an endoscopic image. [0151] FIG. 32A is a diagram illustrating, as a false negative image, an endoscopic image containing an esophageal cancer that was not detected by the convolutional neural network because the lesion appeared in the background of the endoscopic image and was difficult to diagnose. A rectangular frame (182) indicates, for reference, the lesion site (band) of the histologically proven esophageal cancer and is not shown in the actual analysis result image. [0152] FIG. 32B is a diagram illustrating, as a false negative image, an endoscopic image containing an esophageal cancer that was not detected by the convolutional neural network because only a part of the lesion was present in the endoscopic image and it was difficult to diagnose. A rectangular frame (184) indicates, for reference, the lesion site (band) of the histologically proven esophageal cancer and is not shown in the actual analysis result image. [0153] FIG. 32C is a diagram illustrating, as a false negative image, an endoscopic image containing an esophageal cancer that was not detected by the convolutional neural network as a result of being misdiagnosed as inflammation of the background mucosa. A rectangular frame (186) indicates, for reference, the lesion site (band) of the histologically proven esophageal cancer and is not shown in the actual analysis result image. [0154] FIG. 32D is a diagram illustrating, as a false negative image, an endoscopic image containing an esophageal cancer that was not detected by the convolutional neural network because the image of the squamous cell carcinoma irradiated with NBI narrow-band light was blurred. A rectangular frame (188) indicates, for reference, the lesion site (band) of the histologically proven esophageal cancer and is not shown in the actual analysis result image. [0155] FIG. 32E is a diagram illustrating, as a false negative image, an endoscopic image containing an esophageal cancer (Barrett's esophageal adenocarcinoma) that was not detected by the convolutional neural network because of insufficient learning regarding adenocarcinoma. A rectangular frame (190) indicates, for reference, the lesion site (band) of the histologically proven esophageal cancer and is not shown in the actual analysis result image. [0156] As the results of the third evaluation test described above indicate, the convolutional neural network detects esophageal cancer with considerable accuracy and at remarkable speed, even when the esophageal cancer is small, and is therefore likely to be useful in reducing missed esophageal cancers in esophageal endoscopy. The results also indicate that the convolutional neural network can correctly classify the detected esophageal cancers and strongly assist an endoscopist in diagnosis based on endoscopic images. It is considered that carrying out further learning processes on the convolutional neural network will yield even higher diagnostic precision.
[0157] Japanese Patent Application No. 2017-209232, filed on October 30, 2017, Japanese Patent Application No. 2018-007967, filed on January 22, 2018, and Japanese Patent Application No. 2018-038828, filed on March 5, 2018, including the specifications, drawings and abstracts, are incorporated herein by reference in their entirety. [Industrial Applicability] [0158] The present invention is suitable for use in a diagnostic imaging support apparatus, a data collection method, a diagnostic imaging support method and a diagnostic imaging support program that are capable of assisting an endoscopist in diagnosis based on endoscopic images. [Reference Signs List] [0159]
10 Endoscopic imaging section
20 Lesion estimation section
30 Presentation control section
40 Learning apparatus
100 Diagnostic imaging support apparatus
101 CPU
102 ROM
103 RAM
104 External storage device
105 Communication interface
200 Endoscopic image capture device
300 Presentation device
D1 Endoscopic image data
D2 Estimation result data
D3 Analysis result image data
D4 Teaching data
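As an aid to reading the list above, the following sketch shows how the numbered components could fit together in code. The class and field names mirror the reference signs, but the interfaces are assumptions made for illustration, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class EstimationResult:                 # D2: estimation result data
    lesion_name: str
    lesion_site: tuple                  # (x1, y1, x2, y2) band
    certainty: float

class LesionEstimationSection:          # 20
    def estimate(self, endoscopic_image) -> EstimationResult:
        """Consumes endoscopic image data (D1); runs CNN inference."""
        raise NotImplementedError

class PresentationControlSection:       # 30
    def render(self, endoscopic_image, result: EstimationResult):
        """Produces analysis result image data (D3) overlaid on D1."""
        raise NotImplementedError
```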
Claims (12) [1] 1. A diagnostic imaging support apparatus (100), characterized in that it comprises: a lesion estimation section (20) that estimates, using a convolutional neural network, a name and a site of a lesion present in an endoscopic image of a digestive organ of an individual and information on the certainty of the name and the site of the lesion, the endoscopic image of the digestive organ being captured by an endoscopic image capture device (200); and a presentation control section (30) that performs control so as to generate an analysis result image showing the name and the site of the lesion and the certainty of the name and the site of the lesion, and so as to present the analysis result image on the endoscopic image of the digestive organ, wherein the convolutional neural network is subjected to a learning process based on lesion names and lesion sites of lesions present in a plurality of tumor endoscopic images of digestive organs, the lesion names and the lesion sites being determined in advance through feature extraction of an atrophy condition, intestinal metaplasia, swelling or depression of a mucosa, and a color tone condition of the mucosa. [2] 2. The diagnostic imaging support apparatus (100) according to claim 1, characterized in that: the presentation control section (30) changes a presentation style of the lesion site information identifying the lesion site in the analysis result image in accordance with the certainty. [3] 3. The diagnostic imaging support apparatus (100) according to claim 1 or 2, characterized in that: the plurality of tumor endoscopic images of digestive organs include an endoscopic image captured with white light illumination of a digestive organ of an individual. [4] 4. The diagnostic imaging support apparatus (100) according to any one of claims 1 to 3, characterized in that: the plurality of tumor endoscopic images of digestive organs include an endoscopic image captured with a dye applied to a digestive organ of an individual. [5] 5. The diagnostic imaging support apparatus (100) according to any one of claims 1 to 4, characterized in that: the plurality of tumor endoscopic images of digestive organs include an endoscopic image captured with narrow-band light illumination of a digestive organ of an individual. [6] 6. The diagnostic imaging support apparatus (100) according to any one of claims 1 to 5, characterized in that the digestive organ includes a stomach. [7] 7. The diagnostic imaging support apparatus (100) according to any one of claims 1 to 5, characterized in that the digestive organ includes an esophagus. [8] 8. The diagnostic imaging support apparatus (100) according to any one of claims 1 to 5, characterized in that the digestive organ includes a duodenum. [9] 9. The diagnostic imaging support apparatus (100) according to any one of claims 1 to 5, characterized in that the digestive organ includes a large intestine. [10] 10. A data collection method, characterized in that it collects, using the diagnostic imaging support apparatus (100) as defined in any one of claims 1 to 9, a result presented by the presentation control section (30) as data related to a lesion of a gastrointestinal tract of an individual. [11]
11. A diagnostic imaging support method characterized in that it uses an apparatus, the apparatus including: a lesion estimation section (20) that estimates, using a convolutional neural network, a name and a site of a lesion present in an endoscopic image of a digestive organ of an individual and information on the certainty of the name and the site of the lesion, the endoscopic image of the digestive organ being captured by an endoscopic image capture device (200); and a presentation control section (30) that performs control so as to generate an analysis result image showing the name and the site of the lesion and the certainty of the name and the site of the lesion, and so as to present the analysis result image on the endoscopic image of the digestive organ, the diagnostic imaging support method comprising: subjecting the convolutional neural network to a learning process based on lesion names and lesion sites of lesions present in a plurality of tumor endoscopic images of digestive organs, the lesion names and the lesion sites being determined in advance through feature extraction of an atrophy condition, intestinal metaplasia, swelling or depression of a mucosa, and a color tone condition of the mucosa. [12] 12. A diagnostic imaging support program characterized in that it causes a computer to execute: a process of estimating, using a convolutional neural network, a name and a site of a lesion present in an endoscopic image of a digestive organ of an individual and information on the certainty of the name and the site of the lesion, the endoscopic image of the digestive organ being captured by an endoscopic image capture device (200); and a process of performing control so as to generate an analysis result image showing the name and the site of the lesion and the certainty of the name and the site of the lesion, and so as to present the analysis result image on the endoscopic image, wherein: the convolutional neural network is subjected to a learning process based on lesion names and lesion sites of lesions present in a plurality of tumor endoscopic images of digestive organs, the lesion names and the lesion sites being determined in advance through feature extraction of an atrophy condition, intestinal metaplasia, swelling or depression of a mucosa, and a color tone condition of the mucosa.